Variance-Invariance-Covariance Regularization
[2303.00633v1] An Information-Theoretic Perspective on Variance-Invariance-Covariance Regularization
In this paper, we provide an information-theoretic perspective on Variance-Invariance-Covariance Regularization (VICReg) for self-supervised learning. To do so, we first demonstrate how information-theoretic quantities can be obtained for deterministic networks, as an alternative to the commonly used but unrealistic assumption of stochastic networks. Next, we relate the VICReg objective to mutual information maximization and use this relation to highlight the objective's underlying assumptions. Based on this relationship, we derive a generalization bound for VICReg, providing generalization guarantees for downstream supervised learning tasks, and present new self-supervised learning methods, derived from a mutual information maximization objective, that outperform existing methods. This work provides a new information-theoretic perspective on self-supervised learning, and on Variance-Invariance-Covariance Regularization in particular, and points the way toward improved transfer learning via information-theoretic self-supervised learning objectives.
Yann LeCun Paper Rejected - Power Of Double-Blind Review
Yann André LeCun, a French computer scientist who works on machine learning, computer vision, mobile robotics, and computational neuroscience, recently tweeted that one of his papers had been rejected from NeurIPS 2021. LeCun is a Silver Professor at New York University's Courant Institute of Mathematical Sciences and Vice President and Chief AI Scientist at Facebook. He is well known for his work on optical character recognition and computer vision using convolutional neural networks (CNNs) and is often regarded as the inventor of convolutional nets. He is also a co-creator of the DjVu image compression technology. His academic and industrial experience spans artificial intelligence, machine learning, deep learning, computer vision, intelligent data analysis, data mining, data compression, digital library systems, and robotics.
Facebook Researcher's New Algorithm Ushers New Paradigm Of Image Recognition
"VICReg could be used to model the dependencies between a video clip and the frame that comes after, therefore learning to predict the future in a video." Humans have an innate ability to identify objects in the wild, even from a blurred glimpse. We do this efficiently by retaining only the high-level features needed for identification and ignoring the details unless they are required. Among deep learning approaches to object detection, contrastive learning pursued a similar premise: learn representations that capture the big picture rather than doing the heavy lifting of devouring pixel-level detail. But contrastive learning has its own limitations.
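To make the objective being discussed concrete, here is a minimal NumPy sketch of the VICReg loss as described in the original paper (Bardes, Ponce, LeCun): an invariance term pulling two embeddings of the same input together, a variance hinge keeping each embedding dimension's standard deviation above a threshold (preventing collapse), and a covariance penalty decorrelating dimensions. The function name and default weights are illustrative, not taken from this article.

```python
import numpy as np

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """Sketch of the VICReg objective for two batches of embeddings.

    z_a, z_b : (batch, dim) arrays, embeddings of two views of the same inputs.
    """
    n, d = z_a.shape

    # Invariance: mean-squared distance between the two views' embeddings.
    sim = np.mean((z_a - z_b) ** 2)

    # Variance: hinge loss keeping each dimension's std above 1,
    # which discourages the embeddings from collapsing to a constant.
    std_a = np.sqrt(z_a.var(axis=0) + eps)
    std_b = np.sqrt(z_b.var(axis=0) + eps)
    var = np.mean(np.maximum(0.0, 1.0 - std_a)) + np.mean(np.maximum(0.0, 1.0 - std_b))

    # Covariance: penalize off-diagonal covariance entries to decorrelate dimensions.
    za = z_a - z_a.mean(axis=0)
    zb = z_b - z_b.mean(axis=0)
    cov_a = (za.T @ za) / (n - 1)
    cov_b = (zb.T @ zb) / (n - 1)
    off_diag = lambda c: (c ** 2).sum() - (np.diag(c) ** 2).sum()
    cov = off_diag(cov_a) / d + off_diag(cov_b) / d

    return sim_w * sim + var_w * var + cov_w * cov
```

Note that, unlike contrastive objectives, nothing here compares an embedding against negative samples: collapse is prevented by the variance and covariance terms alone, which is the property the article highlights.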